Maximum Entropy Gibbs Density Modeling for Pattern Classification
Authors
Abstract
Recent studies have shown that the Gibbs density function is a good model for visual patterns and that its parameters can be learned from pattern-category training data by a gradient algorithm optimizing a constrained entropy criterion. These studies represented each pattern category by a single density. However, the patterns in a category can be so complex as to require a representation spread over several densities to account more accurately for the shape of their distribution in the feature space. The purpose of the present study is to investigate a representation of a visual pattern category by several Gibbs densities using a Kohonen neural structure. In this Gibbs-density-based Kohonen network, which we call a Gibbsian Kohonen network, each node stores the parameters of a Gibbs density. Collectively, these Gibbs densities represent the pattern category. The parameters are learned by a gradient update rule so that the corresponding Gibbs densities maximize entropy subject to reproducing the observed feature statistics of the training patterns. We verified the validity of the method and the efficiency of the ensuing Gibbs density pattern representation on a handwritten character recognition application.

Entropy 2012, 14, 2479.
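As a rough illustration of the learning rule the abstract describes, here is a minimal sketch (our own toy construction, not the paper's implementation) of a winner-take-all network whose nodes each hold the parameter of a one-parameter Gibbs density p(x) ∝ exp(−λ·φ(x)) over a small finite feature space. Each sample updates the best-matching node by ascending the log-likelihood gradient E_model[φ] − φ(x), which is the maximum-entropy moment-matching condition. The feature map φ, the feature space, the two-node layout, and the initial parameters are all assumptions for illustration; the paper's full Kohonen structure would also update neighboring nodes, which is omitted here.

```python
import numpy as np

X_SPACE = np.arange(10)  # toy finite feature space: x in {0, ..., 9}

def phi(x):
    """Scalar feature statistic of a pattern (toy choice: identity)."""
    return float(x)

def gibbs_probs(lam):
    """Normalized Gibbs density p(x) = exp(-lam * phi(x)) / Z over X_SPACE."""
    w = np.exp(-lam * X_SPACE.astype(float))
    return w / w.sum()

def model_mean(lam):
    """E_model[phi] under the Gibbs density with parameter lam."""
    return float(gibbs_probs(lam) @ X_SPACE)

def train(samples, lam_init, lr=0.05, epochs=200):
    """Winner-take-all training: each sample updates the node whose Gibbs
    density assigns it the highest probability, stepping along the
    log-likelihood gradient E_model[phi] - phi(x). At the fixed point each
    node's expected statistic matches the mean statistic of the samples it
    wins -- the maximum-entropy constraint of the abstract."""
    lam = np.array(lam_init, dtype=float)
    for _ in range(epochs):
        for x in samples:
            k = int(np.argmax([gibbs_probs(l)[x] for l in lam]))
            lam[k] += lr * (model_mean(lam[k]) - phi(x))
    return lam

# Two clusters of toy "pattern features": low values and high values.
rng = np.random.default_rng(0)
samples = np.array([0, 1, 2] * 10 + [7, 8, 9] * 10)
rng.shuffle(samples)

# One node initially biased toward small x, one toward large x.
lam = train(samples, lam_init=[0.5, -0.5])
means = sorted(model_mean(l) for l in lam)
print(means)  # each node's mean statistic settles near its cluster's mean
```

With the two clusters centered at 1 and 8, the two nodes specialize: each converges so that its model's expected statistic matches its cluster's sample mean, which is the single-density FRAME-style update applied per node.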
Similar Articles
Modeling of the Maximum Entropy Problem as an Optimal Control Problem and its Application to Pdf Estimation of Electricity Price
In this paper, continuous optimal control theory is used to model and solve the maximum entropy problem for a continuous random variable. The maximum entropy principle provides a method to obtain the least-biased probability density function (Pdf) estimate. In this paper, to find a closed-form solution for the maximum entropy problem with any number of moment constraints, the entropy is consi...
Learning in Gibbsian Fields: How Accurate and How Fast Can It Be?
Gibbsian fields or Markov random fields are widely used in Bayesian image analysis, but learning Gibbs models is computationally expensive. The computational complexity is pronounced by the recent minimax entropy (FRAME) models, which use large neighborhoods and hundreds of parameters [22]. In this paper, we present a common framework for learning Gibbs models. We identify two key factors that ...
Learning Inhomogeneous Gibbs Model of Faces by Minimax Entropy
In this paper we propose a novel inhomogeneous Gibbs model by the minimax entropy principle, and apply it to face modeling. The maximum entropy principle generalizes the statistical properties of the observed samples and results in the Gibbs distribution, while the minimum entropy principle makes the learnt distribution close to the observed one. To capture the fine details of a face, an inhomo...
Log-concavity and the maximum entropy property of the Poisson distribution
We prove that the Poisson distribution maximises entropy in the class of ultra-log-concave distributions, extending a result of Harremoës. The proof uses ideas concerning log-concavity, and a semigroup action involving adding Poisson variables and thinning. We go on to show that the entropy is a concave function along this semigroup...
Minimum Cross-entropy Methods for Rare-event Simulation
In this paper we apply the minimum cross-entropy method (MinxEnt) to estimate rare-event probabilities for the sum of i.i.d. random variables. MinxEnt is an analogue of the Maximum Entropy Principle in the sense that the objective is to minimize the relative (or cross-) entropy of a target density h from an unknown density f under suitable constraints. The main idea is to use the solution to thi...
Journal: Entropy
Volume: 14, Issue: –
Pages: –
Published: 2012